This study compared the results from nine automated essay scoring (AES) engines on eight essay scoring prompts drawn from six states that annually administer high-stakes writing assessments. The scoring sources were:

H1: Human Rater 1
H2: Human Rater 2
AIR: American Institutes for Research
CMU: Carnegie Mellon University
CTB: CTB McGraw-Hill
ETS: Educational Testing Service
MI: Measurement, Inc.
MM: MetaMetrics
PKT: Pearson Knowledge Technologies
PM: Pacific Metrics
VL: Vantage Learning

The results demonstrated that, overall, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items.